
Understanding tiCrypt Infrastructure: Components, Connectivity, and Deployment Options

· 7 min read
Thomas Samant

Planning a tiCrypt deployment starts with understanding the infrastructure that powers it. This guide walks through the core components, how they connect, and the deployment architectures available — from a lightweight demo system to a full-scale production environment with batch processing.

Note: This guide covers infrastructure planning and setup. The tiCrypt installation and software deployment process is covered separately.


Core Components

tiCrypt is built from a set of modular components, each with a distinct role. Here's what powers the platform.

tiCrypt Backend

The backend is the heart of the system, composed of 11 services. The most critical ones include:

  • ticrypt-rest — The HTTPS entry point for the entire system. All other services depend on it. It runs behind Nginx as a reverse-proxied virtual domain.
  • ticrypt-auth — Handles authentication, authorization, and serves as the global coordinator across all backend services.
  • ticrypt-vm — Manages the full virtual machine lifecycle, including advanced features like SLURM integration for batch processing.
  • ticrypt-logger — Maintains a tamper-resistant, blockchain-structured relational log of all system activity, designed for processing by tiCrypt Audit.
  • ticrypt-proxy — Creates secure tunnels between users and their VMs, enabling RDP sessions, application access, and other connectivity.

tiCrypt Audit

tiCrypt Audit is a dedicated system for processing logs, generating reports and alerts, and running ad hoc queries. It is designed around three principles:

  • Isolation — Audit does not require direct access to the tiCrypt backend. The backend pushes live logs to Audit over port 25000, but the reverse path does not exist. This means security teams can use Audit without gaining access to any other part of the system.
  • Full History — Audit retains logs for the lifetime of the deployment. The complete system history can be reconstructed at any point in the future.
  • High Performance — Built on ClickHouse with specialized data-loading techniques, most ad hoc queries return in under a second. Individual reports export in 2–10 seconds, and generating thousands of reports takes only minutes.

Data Ingress

tiCrypt provides two mechanisms for securely acquiring data from external sources:

  • ticrypt-sftp — SFTP-based data ingestion. Requires an HTTPS endpoint and an SFTP port (22 or 2022).
  • ticrypt-mailbox — Web-based data ingestion. Requires an HTTPS endpoint.

Both services share the same underlying architecture and are intentionally deployed outside the secure infrastructure perimeter. This allows external collaborators to submit data without accessing the secure system. However, both require a network path to the tiCrypt backend REST interface — an important consideration if the backend sits behind a VPN.
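Whether that path exists can be verified before deployment. A minimal sketch, assuming a placeholder backend domain (substitute your real ticrypt-rest virtual domain); any HTTP response, even an error status, proves the network route is open:

```python
import urllib.request
import urllib.error

def backend_reachable(rest_url: str, timeout: float = 5.0) -> bool:
    """Return True if the ticrypt-rest HTTPS endpoint answers at all.

    An HTTP error response still proves the network path exists;
    only connection-level failures mean the path is blocked
    (e.g. the backend sits behind a VPN the ingress host cannot reach).
    """
    try:
        urllib.request.urlopen(rest_url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server answered; the route is open
    except (urllib.error.URLError, OSError):
        return False  # no route to the backend

# Usage (placeholder domain from the naming convention below):
# backend_reachable("https://backend.my_system.my_domain.edu")
```

Run this once from the ticrypt-sftp / ticrypt-mailbox host before go-live; a False result here is the most common cause of ingress failures.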

Virtual Machine Hosting

tiCrypt manages one or more VM hosts with varying configurations of memory, CPU cores, and GPUs. The hardware does not need to be uniform across hosts.

VM hosts run secure, tiCrypt-managed virtual machines that interact with the backend and, in a tightly controlled manner, with each other. Direct internet access is not required — only connectivity to the backend server.

VM performance depends on three factors: host hardware, the distributed filesystem, and network speed. For production environments, high-performance storage and fast networking are essential.

Batch Processing with SLURM

tiCrypt supports batch processing through SLURM integration, handled by a dedicated component, tiCrypt-host-manager, which coordinates between SLURM and the tiCrypt backend.

SLURM hosts require the same filesystem access and backend connectivity as standard VM hosts. While it's possible to run both interactive VMs and SLURM workloads on the same host, separating them simplifies the setup.


Setup Requirements

Connectivity

| Connection | Requirement |
| --- | --- |
| ticrypt-vm → VM Hosts | SSH access for VM lifecycle management |
| ticrypt-proxy → VM Hosts | Access to ports 5900–6256 |
| ticrypt-logger → tiCrypt Audit | Access to port 25000 |
| ticrypt-sftp / ticrypt-mailbox → ticrypt-rest | Access to the HTTPS frontend |
| ticrypt-rest, Audit, sftp, mailbox | Each requires its own Nginx frontend for HTTPS |
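These requirements can be smoke-tested with plain TCP probes. A minimal sketch; the hostnames below are placeholders for your actual VM hosts, Audit node, and backend:

```python
import socket

# Required network paths from the connectivity requirements above.
# Hostnames are placeholders; substitute your own.
REQUIRED_PATHS = [
    ("vm-host-1", 22),       # ticrypt-vm  -> VM hosts (SSH)
    ("vm-host-1", 5900),     # ticrypt-proxy -> VM hosts (5900-6256)
    ("audit-host", 25000),   # ticrypt-logger -> tiCrypt Audit
    ("backend-host", 443),   # sftp/mailbox -> ticrypt-rest HTTPS frontend
]

def tcp_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def blocked_paths(paths=REQUIRED_PATHS):
    """Return the subset of required paths that are currently unreachable."""
    return [(h, p) for h, p in paths if not tcp_open(h, p)]
```

Note that each probe must run from the machine that originates the connection in the table, since firewall rules are direction-specific.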

DNS and Certificates

Each HTTPS-enabled service requires a dedicated virtual domain and its own TLS certificate. Multi-domain certificates are not recommended, as they are considered less secure. A suggested naming convention:

| Service | Subdomain Example |
| --- | --- |
| ticrypt-rest | backend.my_system.my_domain.edu |
| tiCrypt Audit | audit.my_system.my_domain.edu |
| ticrypt-sftp | sftp.my_system.my_domain.edu |
| ticrypt-mailbox | mailbox.my_system.my_domain.edu |
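The convention above is mechanical, so it can be generated for any system and domain pair. A small sketch of the one-subdomain-per-service scheme:

```python
def service_domains(system: str, domain: str) -> dict:
    """Build the suggested one-subdomain-per-service naming scheme.

    Each HTTPS-enabled service gets its own virtual domain so it can
    carry its own single-domain TLS certificate.
    """
    services = ("backend", "audit", "sftp", "mailbox")
    return {s: f"{s}.{system}.{domain}" for s in services}

# With the placeholder names from the table:
# service_domains("my_system", "my_domain.edu")
# -> {"backend": "backend.my_system.my_domain.edu", ...}
```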

Port Access

All HTTPS traffic is served through Nginx with virtual domains and reverse-proxied to local ports (typically 8080–8084).

| Service | Port(s) | Notes |
| --- | --- | --- |
| ticrypt-rest | HTTPS → 8080 | Port 443 open to users |
| ticrypt-proxy | 6000–6100 | Same visibility as port 443 |
| tiCrypt Audit | 25000 (logs), HTTPS → 8081 | Port 443 open to admins |
| ticrypt-sftp | 2022 (SFTP), HTTPS → 8082 | Port 443 open to the world |
| ticrypt-mailbox | HTTPS → 8083 | Port 443 open to the world |
| SSH | 22 | Management access and Libvirt on VM hosts |
| VM Hosts | 5900–6256 | Port forwarding from the backend |
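The table can be rendered directly into host firewall rules. A hypothetical sketch using firewalld syntax; the public/internal zone split is an illustrative assumption, not an official tiCrypt firewall policy, so adapt it to your site's rules:

```python
# Ports exposed to users / the world (443, SFTP, proxy range) versus
# ports that only need to be open between internal machines.
PUBLIC_PORTS = ["443/tcp", "2022/tcp", "6000-6100/tcp"]
INTERNAL_PORTS = ["22/tcp", "25000/tcp", "5900-6256/tcp"]

def firewalld_commands(public=PUBLIC_PORTS, internal=INTERNAL_PORTS):
    """Render the port plan as firewall-cmd invocations (strings only)."""
    cmds = [f"firewall-cmd --zone=public --add-port={p} --permanent"
            for p in public]
    cmds += [f"firewall-cmd --zone=internal --add-port={p} --permanent"
             for p in internal]
    cmds.append("firewall-cmd --reload")
    return cmds
```

Printing the list and reviewing it before execution is safer than running the commands blindly, since the right zone assignment depends on which machine the rules land on.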

Storage

| Service | Mount Point | Minimum Size |
| --- | --- | --- |
| ticrypt-rest | /storage/vault | 100 GB+ |
| VM Hosts / ticrypt-vm | /storage/libvirt | 1 TB+ |
| tiCrypt Audit | /var/clickhouse | 10 GB+ |

Storage needs scale with usage. Large deployments can reach 10 TB+ for the vault and 1 PB+ for VM disk images.
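Because needs scale with usage, a rough capacity plan helps when sizing hardware. The per-user and per-VM coefficients below are illustrative assumptions only; the floors come from the minimum sizes above:

```python
def plan_storage_gb(users: int, vms: int,
                    vault_gb_per_user: float = 20.0,
                    image_gb_per_vm: float = 200.0) -> dict:
    """Hypothetical linear sizing sketch, in GB per mount point.

    The coefficients are illustrative assumptions, not tiCrypt guidance;
    the max() floors are the documented minimums.
    """
    return {
        "/storage/vault":   max(100, users * vault_gb_per_user),   # min 100 GB
        "/storage/libvirt": max(1000, vms * image_gb_per_vm),      # min 1 TB
        "/var/clickhouse":  10,                                    # min 10 GB
    }
```

Whatever coefficients you choose, plan for growth: the document's large-deployment figures (10 TB+ vault, 1 PB+ images) are orders of magnitude above the minimums.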


Deployment Architectures

tiCrypt scales from a single-server demo to a multi-node production cluster. Below are the most common configurations.

Single Server (Demo/Test)

Everything — backend services, Audit, data ingress, and VM hosting — runs on one machine. This is suitable for demos and testing only, not production use.

Minimum specs: 32 cores (with virtualization extensions), 128 GB RAM, 1 TB storage.
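The "virtualization extensions" requirement can be checked by inspecting CPU flags. A minimal sketch for Linux hosts (vmx indicates Intel VT-x, svm indicates AMD-V):

```python
def has_virtualization(cpuinfo_text: str) -> bool:
    """True if /proc/cpuinfo content advertises hardware virtualization."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

# On a Linux host:
# has_virtualization(open("/proc/cpuinfo").read())
```

If this returns False on physical hardware, virtualization is usually just disabled in the BIOS/UEFI; on a VM, nested virtualization must be enabled by the hypervisor.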

Small Production System

A three-node setup that separates concerns for reliability and access control:

| Role | Specs | Access |
| --- | --- | --- |
| ticrypt-sftp + ticrypt-mailbox (VM) | 4 cores, 16 GB RAM, 100 GB storage | World-facing |
| tiCrypt Audit (VM) | 2+ cores, 16 GB+ RAM, 100 GB+ storage | Admin/security teams |
| Backend + VM hosting (server) | 64+ cores, 512 GB RAM, 10 TB+ storage | Internal |

Storage is locally attached to the backend server.

Production System with Interactive VMs

This architecture separates the backend from dedicated VM hosts for better scalability:

| Role | Specs | Access |
| --- | --- | --- |
| ticrypt-sftp + ticrypt-mailbox (VM) | 4 cores, 16 GB RAM, 100 GB storage | World-facing |
| tiCrypt Audit (VM) | 8+ cores, 64 GB+ RAM, 1 TB+ storage | Admin/security teams |
| Backend (server or VM) | 32 cores, 128 GB RAM | Internal |
| VM hosts (vm1, vm2, …) | Varies by workload | Internal |

Production System with Interactive VMs and Batch Processing

The most comprehensive deployment adds SLURM nodes alongside interactive VM hosts:

| Role | Notes |
| --- | --- |
| ticrypt-sftp + ticrypt-mailbox | World-facing VM |
| tiCrypt Audit | Admin/security-access VM |
| Backend | Dedicated server or VM |
| VM hosts (vm1, vm2, …) | Libvirt for interactive VMs |
| SLURM hosts (slurm1, slurm2, …) | SLURM + Libvirt for batch VMs |

This configuration scales to a large number of SLURM nodes. Special interactive VMs manage the formation and security of private SLURM clusters on top of the global SLURM scheduler — which is why direct, high-performance connectivity between VM hosts and SLURM hosts is required.

Small SLURM Demo System

A variation of the single-server setup with added SLURM capacity:

  • One server runs all tiCrypt components plus interactive VM hosting.
  • Two or more additional SLURM hosts handle batch processing.

Flexible by Design

tiCrypt's modular architecture means there is no single "correct" deployment. A research group running a handful of interactive VMs on a single server and a large institution operating hundreds of SLURM batch nodes across a dedicated cluster are both valid configurations. The same components simply scale and redistribute across available infrastructure. Storage backends, VM host hardware, and network topology can all vary to match what your environment already provides. As requirements evolve, components like additional VM hosts or SLURM nodes can be introduced without redesigning the existing setup.